4 results
Toward a Normative Model of Meaningful Human Control over Weapons Systems
- Daniele Amoroso, Guglielmo Tamburrini
-
- Journal:
- Ethics & International Affairs / Volume 35 / Issue 2 / Summer 2021
- Published online by Cambridge University Press:
- 19 August 2021, pp. 245-272
-
- Article
-
- Open access
-
The notion of meaningful human control (MHC) has gathered overwhelming consensus and interest in the autonomous weapons systems (AWS) debate. By shifting the focus of this debate to MHC, one sidesteps recalcitrant definitional issues about the autonomy of weapons systems and profitably moves the normative discussion forward. Some delegations participating in discussions at the Group of Governmental Experts on Lethal Autonomous Weapons Systems meetings endorsed the notion of MHC with the proviso that one size of human control does not fit all weapons systems and uses thereof. Building on this broad suggestion, we propose a “differentiated”—but also “principled” and “prudential”—framework for MHC over weapons systems. The need for a differentiated approach—namely, an approach acknowledging that the extent of normatively required human control depends on the kind of weapons systems used and contexts of their use—is supported by highlighting major drawbacks of proposed uniform solutions. Within the wide space of differentiated MHC profiles, distinctive ethical and legal reasons are offered for principled solutions that invariably assign to humans the following control roles: (1) “fail-safe actor,” contributing to preventing the weapon's action from resulting in indiscriminate attacks in breach of international humanitarian law; (2) “accountability attractor,” securing legal conditions for international criminal law (ICL) responsibility ascriptions; and (3) “moral agency enactor,” ensuring that decisions affecting the life, physical integrity, and property of people involved in armed conflicts be exclusively taken by moral agents, thereby alleviating the human dignity concerns associated with the autonomous performance of targeting decisions. And the prudential character of our framework is expressed by means of a rule, imposing by default the more stringent levels of human control on weapons targeting. 
The default rule is motivated by epistemic uncertainties about the behaviors of AWS. Designated exceptions to this rule are admitted only in the framework of an international agreement among states, which expresses the shared conviction that lower levels of human control suffice to preserve the fail-safe actor, accountability attractor, and moral agency enactor requirements on those explicitly listed exceptions. Finally, we maintain that this framework affords an appropriate normative basis for both national arms review policies and binding international regulations on human control of weapons systems.
6 - On banning autonomous weapons systems: from deontological to wide consequentialist reasons
- from PART III - Autonomous weapons systems and human dignity
-
- By Guglielmo Tamburrini, University of Naples Federico II
- Edited by Nehal Bhuta, European University Institute, Florence, Susanne Beck, Universität Hannover, Germany, Robin Geiß, University of Glasgow, Hin-Yan Liu, European University Institute, Florence, Claus Kreß, Universität zu Köln
-
- Book:
- Autonomous Weapons Systems
- Published online:
- 05 August 2016
- Print publication:
- 19 August 2016, pp 122-142
-
- Chapter
-
Summary
Introduction
This chapter examines the ethical reasons supporting a moratorium and, more stringently, a pre-emptive ban on autonomous weapons systems (AWS). Discussions of AWS presuppose a relatively clear idea of what it is that makes those systems autonomous. In this technological context, the relevant type of autonomy is task autonomy, as opposed to personal autonomy, which usually pervades ethical discourse. Accordingly, a weapons system is regarded here as autonomous if it is capable of carrying out the task of selecting and engaging military targets without any human intervention.
Since robotic and artificial intelligence technologies are crucially needed to achieve the required task autonomy in most battlefield scenarios, AWS are identified here with some sort of robotic systems. Thus, ethical issues about AWS are strictly related to technical and epistemological assessments of robotic technologies and systems, at least insofar as the operation of AWS must comply with discrimination and proportionality requirements of international humanitarian law (IHL). A variety of environmental and internal control factors are advanced here as major impediments that prevent both present and foreseeable robotic technologies from meeting IHL discrimination and proportionality demands. These impediments provide overwhelming support for an AWS moratorium – that is, for a suspension of AWS development, production and deployment at least until the technology becomes sufficiently mature with respect to IHL. Discrimination and proportionality requirements, which are usually motivated on deontological grounds by appealing to the fundamental rights of the potential victims, also entail certain moral duties on the part of the battlefield actors. Hence, a moratorium on AWS is additionally supported by a reflection on the proper exercise of these duties – military commanders ought to refuse AWS deployment until the risk of violating IHL is sufficiently low.
Public statements about AWS have often failed to take into account the technical and epistemological assessments of state-of-the-art robotics, which provide support for an AWS moratorium. Notably, some experts of military affairs have failed to convey in their public statements the crucial distinction between the expected short-term outcomes of research programmes on AWS and their more ambitious and distant goals. Ordinary citizens, therefore, are likely to misidentify these public statements as well-founded expert opinions and to develop, as a result, unwarranted beliefs about the technological advancements and unrealistic expectations about IHL-compliant AWS.
Computation and the explanation of intelligent behaviours: ethologically motivated restart
- Edited by Alessandro Andretta, Università degli Studi di Torino, Italy, Keith Kearnes, University of Colorado, Boulder, Domenico Zambella, Università degli Studi di Torino, Italy
-
- Book:
- Logic Colloquium 2004
- Published online:
- 05 July 2014
- Print publication:
- 10 December 2007, pp 168-186
-
- Chapter
-
Summary
Abstract. Computational theorizing is fruitfully pursued in the investigation of sensorimotor coordination mechanisms of simple biological systems, such as chemotaxis in unicellular organisms. These investigations undermine the sweeping claim that intelligent and adaptive behaviours in biological systems are to be accounted for in terms of continuous systems. Moreover, they point to the opportunity of developing a more fine-grained framework for analyzing the hierarchical interplay between computational, dynamical, and hybrid models of adaptive behaviours in both biological systems and machines. Key epistemological issues arising in this context of inquiry are clearly identified in Turing's and von Neumann's early reflections on the computational modelling of intelligent behaviours and brain functions.
Introduction. A variety of sensorimotor coordination mechanisms are being successfully modelled on the basis of continuous dynamical system approaches (Beer [1997]; Steinhage and Bergener [2000]; Turvey and Carello [1995]). This work is invoked as empirical support for a sweeping “dynamicist” thesis: intelligent and adaptive biological behaviours are to be ultimately accounted for in terms of continuous (dynamical) systems; properly computational investigations make approximate simulation tools available for dynamical theories but play no essential theoretical role (Port and van Gelder [1995]). Similar claims about mathematical theorizing in cognitive ethology and biology at large can be found in (Beer [1995]; Steels [1995]) and (Longo [2003]), respectively. These claims are critically examined here in the light of theoretical models of simple sensorimotor adaptive behaviours. These case studies are particularly suited to our purposes, for continuous dynamical approaches are supposedly at their best in the modelling of sensorimotor coordination mechanisms. Computational approaches, we submit, are being fruitfully pursued there too.
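The kind of discrete computational model the abstract has in mind can be illustrated with a minimal sketch of bacterial run-and-tumble chemotaxis. This is not a model from the chapter itself: the concentration field, the two-mode control rule, and all numerical parameters below are illustrative assumptions, chosen only to show how a discrete-state program, rather than a continuous dynamical system, can capture a simple sensorimotor coordination mechanism.

```python
import random

def concentration(pos):
    # Hypothetical attractant field: concentration peaks at x = 0
    # and falls off with distance (illustrative, not from the chapter).
    return -abs(pos)

def run_and_tumble(steps=2000, seed=0):
    """Discrete-state sketch of bacterial chemotaxis.

    The cell alternates between two discrete modes: 'run' (keep the
    current heading) and 'tumble' (pick a random new heading). The
    tumble probability is biased by a temporal comparison: tumble
    rarely while the attractant concentration is rising, often while
    it is falling. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    pos, heading = 50.0, -1          # start far from the peak at x = 0
    last_c = concentration(pos)
    for _ in range(steps):
        c = concentration(pos)
        # Core discrete control rule: a two-way branch, not a
        # continuous differential equation.
        p_tumble = 0.1 if c > last_c else 0.6
        if rng.random() < p_tumble:
            heading = rng.choice([-1, 1])   # tumble: random new heading
        pos += heading                       # run: one step along heading
        last_c = c
    return pos

final = run_and_tumble()
```

Run over many steps, the biased tumbling turns an otherwise unbiased random walk into net drift up the gradient, so `final` lands much closer to the peak at 0 than the starting position: a qualitatively chemotaxis-like behaviour produced by a few lines of discrete control logic.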
Biorobotic Experiments for the Discovery of Biological Mechanisms
- Edoardo Datteri, Guglielmo Tamburrini
-
- Journal:
- Philosophy of Science / Volume 74 / Issue 3 / July 2007
- Published online by Cambridge University Press:
- 01 January 2022, pp. 409-430
- Print publication:
- July 2007
-
- Article
-
Robots are being extensively used for the purpose of discovering and testing empirical hypotheses about biological sensorimotor mechanisms. We examine here methodological problems that have to be addressed in order to design and perform “good” experiments with these machine models. These problems notably concern the mapping of biological mechanism descriptions into robotic mechanism descriptions; the distinction between theoretically unconstrained “implementation details” and robotic features that carry a modeling weight; the role of preliminary calibration experiments; the monitoring of experimental environments for disturbing factors that affect both modeling features and theoretically unconstrained implementation details of robots. Various assumptions that are gradually introduced in the process of setting up and performing these robotic experiments become integral parts of the background hypotheses that are needed to bring experimental observations to bear on biological mechanism descriptions.